


61c00c07e6d27285e4b952e96cc65666-Paper-Conference.pdf

Neural Information Processing Systems

However, in practice, new reconstruction methods could improve performance for at least three other reasons: learning more about the distribution of stimuli, becoming better at reconstructing text or images in general, or exploiting weaknesses in current image and/or text evaluation metrics. Here we disentangle how much of the reconstruction is due to these other factors vs. productively using the neural recordings.



a1a2c3fed88e9b3ba5bc3625c074a04e-Paper.pdf

Neural Information Processing Systems

The three-dimensional reconstruction of multiple interacting humans given a monocular image is crucial for the general task of scene understanding, as capturing the subtleties of interaction is often the very reason for taking a picture.




Geo-Neus: Geometry-Consistent Neural Implicit Surfaces Learning for Multi-view Reconstruction

Neural Information Processing Systems

However, one key challenge remains: existing approaches lack explicit multi-view geometry constraints, hence usually fail to generate geometry-consistent surface reconstruction.


TFS-NeRF: Template-Free NeRF for Semantic 3D Reconstruction of Dynamic Scene

Neural Information Processing Systems

Despite advancements in Neural Implicit models for 3D surface reconstruction, handling dynamic environments with interactions between arbitrary rigid, non-rigid, or deformable entities remains challenging. Generic reconstruction methods adaptable to such dynamic scenes often require additional inputs like depth or optical flow, or rely on pre-trained image features for reasonable outcomes. These methods typically use latent codes to capture frame-by-frame deformations. Another set of dynamic scene reconstruction methods is entity-specific, mostly focusing on humans, and relies on template models. In contrast, some template-free methods bypass these requirements and adopt traditional LBS (Linear Blend Skinning) weights for a detailed representation of deformable object motions, although they involve complex optimizations leading to lengthy training times.
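For context on the LBS weights the abstract mentions, here is a minimal sketch of Linear Blend Skinning: each vertex is deformed by a convex combination of per-bone rigid transforms. All arrays, bone transforms, and weights below are toy illustration data, not from the paper.

```python
import numpy as np

def linear_blend_skinning(rest_verts, bone_transforms, weights):
    """Classic LBS: blend per-bone 4x4 transforms with per-vertex
    skinning weights, then apply the blended transform to each vertex.

    rest_verts:      (V, 3) rest-pose vertex positions
    bone_transforms: (B, 4, 4) homogeneous transform per bone
    weights:         (V, B) skinning weights, each row sums to 1
    """
    V = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((V, 1))])               # (V, 4)
    blended = np.einsum("vb,bij->vij", weights, bone_transforms)  # (V, 4, 4)
    deformed = np.einsum("vij,vj->vi", blended, homo)             # (V, 4)
    return deformed[:, :3]

# Toy rig: two bones; bone 1 translates by (1, 0, 0).
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
T0 = np.eye(4)
T1 = np.eye(4); T1[0, 3] = 1.0
W = np.array([[1.0, 0.0],    # vertex 0 follows bone 0 only
              [0.5, 0.5]])   # vertex 1 blends both bones equally
print(linear_blend_skinning(rest, np.stack([T0, T1]), W))
```

Vertex 1 ends up halfway between its rest position and the fully translated position, which is exactly the smooth (and sometimes artifact-prone) blending behavior that skinning-based deformation models exploit.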


How Regularization Terms Make Invertible Neural Networks Bayesian Point Estimators

Heilenkötter, Nick

arXiv.org Artificial Intelligence

Whenever a quantity of interest cannot be observed directly but only through an indirect measurement process or in the presence of noise, one is faced with an inverse problem. To stabilize the reconstruction and mitigate the information loss inherent in the measurement, it is necessary to incorporate additional knowledge about the unknown data -- its prior distribution, which encodes what one expects the reconstruction to resemble, such as the characteristic features of natural images. Yet our ability to describe natural images in an explicit, algorithmic form remains quite limited. Fortunately, recent years have seen the emergence of data-driven approaches that enable the construction of priors directly from collections of representative samples. While these approaches often surpass classical methods in reconstruction quality, many of them lack theoretical guarantees and remain difficult to interpret. A promising direction explored recently [3, 4, 5, 21] involves invertible neural networks. Thanks to their bidirectional structure, a single network can simultaneously approximate the forward operator and serve as a reconstruction method, with stability ensured by the architecture itself. This hybrid use makes it possible to assess deviations from a known forward operator -- or even replace it with a data-based version -- while the learned measurement model keeps the reconstruction process interpretable, and vice versa. This dual capability is particularly relevant in applications where both high-fidelity reconstructions and a faithful representation of the measurement process are critical, such as scientific and medical imaging.
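The bidirectional idea can be sketched with a single affine coupling layer, a standard invertible building block: the same parameters define both the forward map and its exact inverse. The `scale` and `shift` functions below stand in for small neural networks and are purely illustrative, not the paper's architecture.

```python
import numpy as np

def coupling_forward(x, scale, shift):
    """One affine coupling layer: split the input in half and transform
    the second half conditioned on the first. Invertible by construction."""
    x1, x2 = np.split(x, 2)
    y2 = x2 * np.exp(scale(x1)) + shift(x1)
    return np.concatenate([x1, y2])

def coupling_inverse(y, scale, shift):
    """Exact inverse of coupling_forward using the same scale/shift nets."""
    y1, y2 = np.split(y, 2)
    x2 = (y2 - shift(y1)) * np.exp(-scale(y1))
    return np.concatenate([y1, x2])

# Stand-ins for learned conditioner networks (illustrative only).
scale = lambda h: 0.5 * h
shift = lambda h: h + 1.0

x = np.array([0.2, -1.0, 0.7, 1.5])
y = coupling_forward(x, scale, shift)
x_rec = coupling_inverse(y, scale, shift)
print(np.allclose(x, x_rec))  # the same weights run the map both ways
```

Stacking such layers yields a network that is invertible end to end, which is what lets one model act as both (approximate) forward operator and reconstruction method.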


How Sampling Affects the Detectability of Machine-written texts: A Comprehensive Study

Dubois, Matthieu, Yvon, François, Piantanida, Pablo

arXiv.org Artificial Intelligence

As texts generated by Large Language Models (LLMs) are ever more common and often indistinguishable from human-written content, research on automatic text detection has attracted growing attention. Many recent detectors report near-perfect accuracy, often boasting AUROC scores above 99%. However, these claims typically assume fixed generation settings, leaving open the question of how robust such systems are to changes in decoding strategies. In this work, we systematically examine how sampling-based decoding impacts detectability, with a focus on how subtle variations in a model's (sub)word-level distribution affect detection performance. We find that even minor adjustments to decoding parameters -- such as temperature or top-p (nucleus) sampling -- can severely impair detector accuracy, with AUROC dropping from near-perfect levels to 1% in some settings. Our findings expose critical blind spots in current detection methods and emphasize the need for more comprehensive evaluation protocols. To facilitate future research, we release a large-scale dataset encompassing 37 decoding configurations, along with our code and evaluation framework, at https://github.com/BaggerOfWords/Sampling-and-Detection
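The decoding parameters the study varies can be illustrated with a generic sampler over a toy next-token distribution: temperature rescales the logits, and nucleus (top-p) sampling truncates to the smallest set of tokens whose cumulative probability reaches p. This is a standard textbook sketch, not the paper's code.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_p=1.0, rng=None):
    """Temperature scaling followed by nucleus (top-p) truncation:
    keep the smallest prefix of tokens (by probability) whose cumulative
    mass reaches top_p, renormalize, and sample from that set."""
    rng = rng or np.random.default_rng()
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]          # tokens by descending prob
    cum = np.cumsum(probs[order])
    keep = order[: np.searchsorted(cum, top_p) + 1]  # the nucleus set
    p = probs[keep] / probs[keep].sum()
    return int(rng.choice(keep, p=p))

logits = np.array([2.0, 1.0, 0.5, -1.0])
# Low temperature + small top-p collapses sampling onto the argmax token,
# the low-entropy regime in which detectors tend to score best.
print(sample_next_token(logits, temperature=0.3, top_p=0.5))  # → 0
```

Raising the temperature or top-p flattens and widens the sampled distribution, which is precisely the kind of shift in the (sub)word-level distribution the paper shows can defeat detectors.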